Quasi-Bayes linear regression for sequential learning of hidden Markov models
Author
Abstract
This paper presents an online/sequential linear regression adaptation framework for hidden Markov model (HMM) based speech recognition. Our aim is to sequentially improve a speaker-independent speech recognition system so that it can handle nonstationary environments via linear regression adaptation of the HMMs. A quasi-Bayes linear regression (QBLR) algorithm is developed to perform the sequential adaptation, in which the regression matrix is estimated using QB theory. In the estimation, we specify the prior density of the regression matrix as a matrix variate normal distribution and derive the pooled posterior density, which belongs to the same distribution family. Accordingly, the optimal regression matrix can be calculated easily. Moreover, the reproducible prior/posterior pair provides a meaningful mechanism for sequential learning of the prior statistics. At each sequential epoch, only the updated prior statistics and the currently observed data are required for adaptation. The proposed QBLR is a general framework with maximum likelihood linear regression (MLLR) and maximum a posteriori linear regression (MAPLR) as special cases. Experiments on supervised and unsupervised speaker adaptation demonstrate that sequential adaptation using QBLR is efficient and asymptotically approaches batch learning using MLLR and MAPLR in recognition performance.
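The sequential mechanism described above (a reproducible prior/posterior pair for the regression matrix, where each epoch needs only the updated prior statistics plus the current data) can be illustrated with a simplified sketch. This is not the paper's exact QBLR estimator: it assumes ordinary linear regression targets rather than HMM mean vectors, a known isotropic noise variance `sigma2`, and an isotropic prior scale `tau2`; all names and values are illustrative.

```python
import numpy as np

def sequential_update(Lambda, Psi, X, Y, sigma2=1.0):
    """Accumulate posterior statistics for a regression matrix W (y ~ W x).

    The prior/posterior over W is summarized by two accumulators, so the
    posterior after each epoch serves directly as the prior for the next:
      Lambda: (d, d) precision-like accumulator
      Psi:    (p, d) cross-moment accumulator
    The posterior-mean estimate is W = Psi @ inv(Lambda).
    X: (n, d) inputs and Y: (n, p) targets for the current epoch only.
    """
    Lambda = Lambda + X.T @ X / sigma2
    Psi = Psi + Y.T @ X / sigma2
    return Lambda, Psi

def posterior_mean(Lambda, Psi):
    return Psi @ np.linalg.inv(Lambda)

rng = np.random.default_rng(0)
d, p = 3, 2
W_true = rng.normal(size=(p, d))

# Prior: W ~ N(0, tau2 * I)  ->  Lambda0 = I / tau2, Psi0 = 0.
tau2 = 10.0
Lambda = np.eye(d) / tau2
Psi = np.zeros((p, d))

# Process data in small sequential epochs; only the updated statistics
# and the current batch are needed at each step, as in the abstract.
for _ in range(20):
    X = rng.normal(size=(5, d))
    Y = X @ W_true.T + 0.01 * rng.normal(size=(5, p))
    Lambda, Psi = sequential_update(Lambda, Psi, X, Y, sigma2=0.01**2)

W_hat = posterior_mean(Lambda, Psi)
```

Because the Gaussian prior is conjugate here, the posterior after each epoch has the same functional form as the prior, which is the property that makes this style of sequential learning cheap: no past data needs to be stored.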
Similar resources
Online speaker adaptation based on quasi-Bayes linear regression
This paper presents an online/sequential linear regression adaptation framework for hidden Markov model (HMM) based speech recognition. Our aim is to sequentially improve a speaker-independent (SI) speech recognizer to handle nonstationary environments via linear regression adaptation of the SI HMMs. A quasi-Bayes linear regression (QBLR) algorithm is developed to execute online adaptation where t...
Online adaptive learning of continuous-density hidden Markov models based on multiple-stream prior evolution and posterior pooling
We introduce a new adaptive Bayesian learning framework, called multiple-stream prior evolution and posterior pooling, for online adaptation of the continuous density hidden Markov model (CDHMM) parameters. Among three architectures we proposed for this framework, we study in detail a specific two-stream system where linear transformations are applied to the mean vectors of CDHMMs to control th...
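As a point of reference for the kind of transform mentioned in the snippet above (linear transformations applied to the mean vectors of CDHMMs), here is a minimal, hypothetical sketch of a shared affine transform mu' = A @ mu + b estimated by least squares. The actual methods estimate the transform from EM sufficient statistics under the HMM; identity covariances and hard frame-to-mean alignments are simplifying assumptions here, and all names are illustrative.

```python
import numpy as np

def estimate_transform(mus, frames):
    """Least-squares estimate of W = [b | A] from aligned adaptation data.

    mus:    (n, d) original Gaussian means, one per aligned frame.
    frames: (n, d) observed adaptation frames.
    Solves min_W sum_i || frames_i - W @ xi_i ||^2 with the extended
    mean vector xi_i = [1, mu_i], so W has shape (d, d + 1).
    """
    Xi = np.hstack([np.ones((len(mus), 1)), np.asarray(mus)])  # (n, d+1)
    Y = np.asarray(frames)                                     # (n, d)
    M, *_ = np.linalg.lstsq(Xi, Y, rcond=None)                 # (d+1, d)
    return M.T                                                 # (d, d+1)

def adapt_mean(W, mu):
    """Apply the estimated transform: mu' = A @ mu + b."""
    xi = np.concatenate([[1.0], mu])
    return W @ xi

rng = np.random.default_rng(1)
d = 4
A_true = np.eye(d) + 0.1 * rng.normal(size=(d, d))
b_true = rng.normal(size=d)

mus = rng.normal(size=(50, d))  # original component means
frames = mus @ A_true.T + b_true + 0.01 * rng.normal(size=(50, d))

W = estimate_transform(mus, frames)
mu_adapted = adapt_mean(W, mus[0])
```

Sharing one transform across many Gaussian components is what lets these schemes adapt a large model from little data: the number of free parameters is d * (d + 1), independent of the number of components.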
HTML-to-XML Migration by Means of Sequential Learning and Grammatical Inference
We consider the problem of document conversion from the layout-oriented HTML into a semanticoriented XML annotation. An important fragment of the conversion problem can be reduced to the sequential learning framework, where source tree leaves are labeled with XML tags. We review sequential learning methods developed for the NLP applications, including the Naive Bayes and Maximum Entropy. Then w...
Optimizing Probabilistic Models for Relational Sequence Learning
This paper tackles the problem of relational sequence learning selecting relevant features elicited from a set of labelled sequences. Each relational sequence is firstly mapped into a feature vector using the result of a feature construction method. The second step finds an optimal subset of the constructed features that leads to high classification accuracy, by adopting a wrapper approach that...
On-line adaptation of the SCHMM parameters based on the segmental quasi-Bayes learning for speech recognition
In this correspondence, on-line quasi-Bayes adaptation of the mixture coefficients and mean vectors in semicontinuous hidden Markov model (SCHMM) is studied. The viability of the proposed algorithm is confirmed and the related practical issues are addressed in a specific application of on-line speaker adaptation using a 26-word English alphabet vocabulary.
Journal title:
- IEEE Trans. Speech and Audio Processing
Volume: 10, Issue: -
Pages: -
Publication year: 2002